One-Sided Unsupervised Domain Mapping

Neural Information Processing Systems

In unsupervised domain mapping, the learner is given two unmatched datasets $A$ and $B$. The goal is to learn a mapping $G_{AB}$ that translates a sample in $A$ to an analogous sample in $B$. Recent approaches have shown that when simultaneously learning both $G_{AB}$ and the inverse mapping $G_{BA}$, convincing mappings are obtained. In this work, we present a method for learning $G_{AB}$ without learning $G_{BA}$. This is done by learning a mapping that maintains the distance between a pair of samples. Moreover, good mappings are obtained even by maintaining the distance between different parts of the same sample before and after mapping. We present experimental results showing that the new method not only allows for one-sided mapping learning, but also leads to better numerical results than the existing circularity-based constraint.
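The distance-preservation constraint described above can be sketched as a loss that asks pairwise distances within a batch to be preserved under the mapping. The following is a minimal, hypothetical sketch (not the paper's exact formulation): it uses mean L1 pairwise distances, standardized within each batch so the two domains' distance scales are comparable.

```python
import numpy as np

def distance_preservation_loss(x_a, x_b, eps=1e-8):
    """Hypothetical sketch of a pairwise distance-preservation loss.

    x_a: batch of samples from domain A, shape (n, d)
    x_b: their mapped versions G_AB(x_a) in domain B, shape (n, d)
    Returns the mean absolute difference between the standardized
    pairwise L1 distance matrices of the two batches.
    """
    # pairwise mean-L1 distances within each batch
    da = np.abs(x_a[:, None, :] - x_a[None, :, :]).mean(axis=2)
    db = np.abs(x_b[:, None, :] - x_b[None, :, :]).mean(axis=2)
    # standardize each distance matrix so scales are comparable
    da = (da - da.mean()) / (da.std() + eps)
    db = (db - db.mean()) / (db.std() + eps)
    return np.abs(da - db).mean()
```

A positive affine mapping (e.g. `x_b = 2 * x_a + 1`) preserves standardized pairwise distances exactly, so the loss is near zero for it, while an unrelated batch incurs a large penalty.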


Reviews: One-Sided Unsupervised Domain Mapping

This paper tackles the problem of unsupervised domain mapping. The paper introduces a new constraint that compares pairs of samples and enforces high cross-domain correlation between the matching distances computed in each domain. An alternative to the pairwise distance is provided for cases in which we only have access to one sample at a time: the same rationale can be applied by splitting each image and comparing the distances between its left/right or up/down halves in both domains. The final model is trained by combining previously introduced losses (the adversarial and circularity losses) with the new distance loss, showing that the new constraint is effective and allows for one-directional mapping.
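The single-sample variant mentioned above can be sketched as follows. This is a hedged illustration, not the paper's exact loss: it compares the mean L1 distance between an image's left and right halves before the mapping with the same quantity after the mapping.

```python
import numpy as np

def self_distance_loss(img_a, img_b):
    """Hypothetical sketch of the single-sample (self-distance) variant.

    img_a: image in domain A, shape (H, W)
    img_b: its mapped version in domain B, same shape
    Compares the left-half/right-half distance in each domain.
    """
    w = img_a.shape[1] // 2
    # mean L1 distance between the left and right halves of each image
    da = np.abs(img_a[:, :w] - img_a[:, w:2 * w]).mean()
    db = np.abs(img_b[:, :w] - img_b[:, w:2 * w]).mean()
    return abs(da - db)
```

An identity mapping trivially yields zero loss, since the half-to-half distance is unchanged; a mapping that distorts one half more than the other is penalized.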